The cloze task is widely used to evaluate the language understanding ability of NLP systems. However, most existing cloze tasks only require NLP systems to give the relatively best prediction for each input data sample, rather than the absolute quality of all possible predictions, in a consistent way across the input domain. We therefore propose a new task: predicting whether a filler word in a cloze task is a good, neutral, or bad candidate. Extended versions can be designed to predict more discrete classes or continuous scores. We focus on the subtasks of SemEval 2022 Task 7, explore several possible architectures to solve this new task, provide a detailed comparison of them, and propose an ensemble method to improve traditional models on this new task.
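As a minimal illustration of the label scheme described above, the three-class task can be viewed as a binning of a continuous plausibility score, with regression or finer bins as the extended variants. The function name and thresholds below are illustrative assumptions, not taken from SemEval 2022 Task 7:

```python
def label_filler(score: float) -> str:
    """Map a continuous plausibility score in [0, 1] to the three
    discrete classes (good / neutral / bad). Thresholds are
    hypothetical, chosen only for illustration."""
    if score >= 0.7:
        return "good"
    if score >= 0.3:
        return "neutral"
    return "bad"

# The continuous-score variant would predict the score itself;
# finer-grained variants would use more bins.
print(label_filler(0.9))  # good
print(label_filler(0.5))  # neutral
print(label_filler(0.1))  # bad
```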
This paper presents our solutions for the MediaEval 2022 task on DisasterMM. The task is composed of two subtasks, namely (i) Relevance Classification of Twitter Posts (RCTP), and (ii) Location Extraction from Twitter Texts (LETT). The RCTP subtask aims at differentiating between flood-related and non-relevant social media posts, while LETT is a Named Entity Recognition (NER) task that aims at extracting location information from the text. For RCTP, we proposed four different solutions based on BERT, RoBERTa, DistilBERT, and ALBERT, obtaining F1-scores of 0.7934, 0.7970, 0.7613, and 0.7924, respectively. For LETT, we used three models, namely BERT, RoBERTa, and DistilBERT, obtaining F1-scores of 0.6256, 0.6744, and 0.6723, respectively.
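The F1-scores reported above are the harmonic mean of precision and recall on the positive class. A minimal sketch of that computation for a binary relevance task like RCTP (the labels and example data below are made up for illustration):

```python
def f1_score(gold, pred, positive="relevant"):
    """F1 for one positive class: harmonic mean of precision and recall."""
    tp = sum(1 for g, p in zip(gold, pred) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical predictions on four tweets: 2 true positives, 1 false negative.
gold = ["relevant", "relevant", "not_relevant", "relevant"]
pred = ["relevant", "not_relevant", "not_relevant", "relevant"]
print(f1_score(gold, pred))  # precision 1.0, recall 2/3 -> F1 = 0.8
```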